

Underage Workers Are Training AI

WIRED

Like most kids his age, 15-year-old Hassan spent a lot of time online. Before the pandemic, he liked playing football with local kids in his hometown of Burewala in the Punjab region of Pakistan. But Covid lockdowns made him something of a recluse, attached to his mobile phone. "I just got out of my room when I had to eat something," says Hassan, now 18, who asked to be identified under a pseudonym because he was afraid of legal action. From his childhood bedroom, the high schooler was working in the global artificial intelligence supply chain, uploading and labeling data to train algorithms for some of the world's largest AI companies.


These Prisoners Are Training AI

WIRED

Across a sterile white table in a windowless room, I'm introduced to a woman in her forties. She has a square jaw and blonde hair that has been pulled back from her face with a baby-blue scrunchie. "The girls call me Marmalade," she says, inviting me to use her prison nickname. Early on a Wednesday morning, Marmalade is here, in a Finnish prison, to demonstrate a new type of prison labor. The table is bare except for a small plastic bottle of water and an HP laptop.


'Very wonderful, very toxic': how AI became the culture war's new frontier

The Guardian

When Elon Musk introduced the team behind his new artificial intelligence company xAI last month, the billionaire entrepreneur took a question from the rightwing media activist Alex Lorusso. ChatGPT had begun "editorializing the truth" by giving "weird answers like that there are more than two genders", Lorusso posited. Was that a driver behind Musk's decision to launch xAI, he wondered. "I do think there is significant danger in training AI to be politically correct, or in other words training AI to not say what it actually thinks is true," Musk replied. His own company's AI, on the other hand, would be "maximally true", he had said earlier in the presentation.


Ground Truth Or Dare: Factors Affecting The Creation Of Medical Datasets For Training AI

Zając, Hubert D., Avlona, Natalia R., Andersen, Tariq O., Kensing, Finn, Shklovski, Irina

arXiv.org Artificial Intelligence

One of the core goals of responsible AI development is ensuring high-quality training datasets. Many researchers have pointed to the importance of the annotation step in the creation of high-quality data, but less attention has been paid to the work that enables data annotation. We define this work as the design of ground truth schema and explore the challenges involved in the creation of datasets in the medical domain even before any annotations are made. Based on extensive work in three health-tech organisations, we describe five external and internal factors that condition medical dataset creation processes. The three external factors are regulatory constraints, the context of creation and use, and commercial and operational pressures. These factors condition medical data collection and shape the ground truth schema design. The two internal factors are epistemic differences and the limits of labelling. These directly shape the design of the ground truth schema. Discussions of what constitutes high-quality data need to pay attention to the factors that shape and constrain what it is possible to create, in order to ensure responsible AI design.
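The "ground truth schema" the abstract refers to is the label space and annotation rules fixed before any labelling starts. A minimal sketch of what such a schema might look like in code (all labels, definitions, and thresholds here are hypothetical illustrations, not taken from the paper's case studies):

```python
from dataclasses import dataclass
from enum import Enum

# A ground truth schema fixes the label space and the rules annotators
# apply *before* annotation begins. Everything below is illustrative.
class Finding(Enum):
    NORMAL = "normal"
    ABNORMAL = "abnormal"
    UNCERTAIN = "uncertain"  # the "limits of labelling": some cases fit no label cleanly

@dataclass
class AnnotationRule:
    label: Finding
    definition: str      # what evidence justifies assigning this label
    min_annotators: int  # how many independent readers must agree

SCHEMA = [
    AnnotationRule(Finding.NORMAL, "no visible pathology", min_annotators=2),
    AnnotationRule(Finding.ABNORMAL, "pathology confirmed by clinical report", min_annotators=3),
    AnnotationRule(Finding.UNCERTAIN, "readers still disagree after adjudication", min_annotators=1),
]
```

The paper's point is that choices like these are shaped by regulatory, commercial, and epistemic pressures long before an annotator ever sees an image.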


OpenAI strikes deal with AP to pay for using its news in training AI

Washington Post - Technology News

Now, a rising group of authors, musicians, news organizations and social media companies has been pushing back, arguing that the use of their content to train AI is a massive shift in the way the internet works, especially since some of the AI tools being trained on human-made content are already being used to replace human workers. A wave of lawsuits has washed over the industry in the past two weeks alleging improper data use, including class-action suits against OpenAI and Google, and lawsuits against OpenAI from the comedian Sarah Silverman and two prominent fiction authors.


The Download: inaccurate welfare algorithms, and training AI for free

MIT Technology Review

The news: An algorithm funded by the World Bank to determine which families should get financial assistance in Jordan likely excludes people who should qualify, an investigation from Human Rights Watch has found. Why it matters: The organization identified several fundamental problems with the algorithmic system that resulted in bias and inaccuracies. It ranks families applying for aid from least poor to poorest using a secret calculus that assigns weights to 57 socioeconomic indicators. Applicants say that the calculus is not reflective of reality and oversimplifies people's economic situations. The bigger picture: AI ethics researchers are calling for more scrutiny around the increasing use of algorithms in welfare systems.
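The system described is a proxy-means test: applicants are ranked by a weighted sum over socioeconomic indicators. The actual 57 indicators and their weights are secret, so the sketch below is purely illustrative (indicator names, weights, and values are all hypothetical), but it shows why such a scheme can misrank households whose reality the indicators oversimplify:

```python
# Toy proxy-means-test scoring: rank households by a weighted sum of
# socioeconomic indicators. All names, weights, and values are hypothetical.
weights = {"household_size": -0.4, "monthly_income": 1.0, "car_owned": 0.8}

def poverty_score(indicators):
    # Lower score = assessed as poorer = higher priority for aid.
    return sum(weights[k] * v for k, v in indicators.items())

applicants = {
    "A": {"household_size": 6, "monthly_income": 120, "car_owned": 0},
    "B": {"household_size": 3, "monthly_income": 400, "car_owned": 1},
}
# Rank from poorest (lowest score) to least poor.
ranked = sorted(applicants, key=lambda name: poverty_score(applicants[name]))
```

A single crude weight (here, treating any car as a fixed marker of wealth) can push a genuinely poor household down the ranking, which is the kind of oversimplification applicants in the investigation describe.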


Elon Musk reportedly building team to develop ChatGPT alternative

#artificialintelligence

Amid concerns about the neutrality of OpenAI's text-based artificial intelligence (AI) platform ChatGPT, Tesla (NASDAQ: TSLA) CEO Elon Musk seems to be busy working on creating an alternative to the high-profile chatbot, as he has reportedly approached AI researchers in recent weeks. Indeed, Musk has allegedly been recruiting Igor Babuschkin, a researcher who recently left Alphabet's (NASDAQ: GOOGL) DeepMind AI unit and specializes in the machine-learning models used by the likes of ChatGPT, according to a report by The Information published on February 27. Specifically, the report referred to the media outlet's communication with two unnamed people who are said to have direct knowledge of the team-assembling efforts, as well as a third person who was briefed on the conversations between Musk and Babuschkin. As the report recalls, Musk has been critical of OpenAI, which he co-founded in 2015 but has since cut ties with, for installing safeguards that prevent ChatGPT from producing text that might offend specific groups of users, suggesting in 2022 that the technology was an example of "training AI to be woke." More recently, he joked that "what we need is TruthGPT," which led to the appearance of an eponymous project that claimed it was already developing such a bot using technology from the cryptocurrency industry and asked for Musk's assistance. Despite the criticism, crypto trading platform Binance has praised ChatGPT for its potential in crypto adoption, expansion, and education, as it is able to explain complicated concepts, such as proof-of-work (PoW) and Bitcoin mining, in a conversational and often fun way, like through a rap song or by imitating a 1920s mobster.


Artificial Intelligence Computing Using Networks of Tiny Nanomagnets

#artificialintelligence

Researchers have shown it is possible to perform artificial intelligence using tiny nanomagnets that interact like neurons in the brain. The new technology, developed by a team led by Imperial College London researchers, could significantly reduce the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months. In a paper published today (May 5, 2022) in the journal Nature Nanotechnology, the international team has produced the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed nanomagnets can be used for 'time-series prediction' tasks, such as predicting and regulating insulin levels in diabetic patients.


'Nanomagnetic' computing can provide low-energy AI

#artificialintelligence

The new method, developed by a team led by Imperial College London researchers, could slash the energy cost of artificial intelligence (AI), which is currently doubling globally every 3.5 months. In a paper published today in Nature Nanotechnology, the international team has produced the first proof that networks of nanomagnets can be used to perform AI-like processing. The researchers showed nanomagnets can be used for 'time-series prediction' tasks, such as predicting and regulating insulin levels in diabetic patients. Artificial intelligence that uses 'neural networks' aims to replicate the way parts of the brain work, where neurons talk to each other to process and retain information. A lot of the maths used to power neural networks was originally invented by physicists to describe the way magnets interact, but at the time it was too difficult to use magnets directly, as researchers didn't know how to put data in and get information out.
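Work like this is commonly framed as reservoir computing: a physical system with rich dynamics (here, interacting nanomagnets) acts as a fixed "reservoir", and only a simple linear readout is trained. A software analogue is the echo state network, sketched below on a toy one-step-ahead time-series prediction task. This is an illustrative stand-in, not the authors' method; all sizes, signals, and parameters are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Echo state network: a fixed random recurrent layer stands in for the
# interacting nanomagnets; only the linear readout is trained.
N = 100                                    # reservoir units (illustrative)
W_in = rng.uniform(-0.5, 0.5, (N, 1))      # input weights (fixed)
W = rng.uniform(-0.5, 0.5, (N, N))         # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect its states."""
    x = np.zeros(N)
    states = []
    for t in range(len(u)):
        x = np.tanh(W_in[:, 0] * u[t] + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
t = np.arange(500)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])   # reservoir states, shape (499, N)
y = u[1:]                   # targets: the next input sample

# Train only the readout, via ridge regression.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
pred = X @ W_out
```

The appeal of a physical reservoir is that the expensive recurrent dynamics come "for free" from the magnets' physics; training touches only the cheap linear readout.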


Debate continues over the pros and cons of regulating artificial intelligence

#artificialintelligence

What are the issues of most concern for businesses in the EU Commission's recently published AI Act proposals? Our virtual gathering included representatives from the UK, Netherlands and USA, stretching across the automotive, energy, education, professional services and tech sectors. As with our first AI roundtable, the discussion ranged far and wide. A notable difficulty with the Commission's draft regulation on AI (as proposed, its "AI Act") is that it assumes that an end-to-end "provider" of an AI system can be identified and fixed with liability. The AI Act defines such service providers as the person or organisation that developed the system or had it developed.